Text File | 1996-02-12 | 29KB | 669 lines
Frequently Asked Questions (FAQS);faqs.457
My impression from the black and white plates of the Voynich MS I've seen is
that the illustrations are very weird when compared to other 'illuminated'
manuscripts of the period. In particular, I would say that there is an emphasis
on the female nude that is unusual for the art of this period. I can't say
that I myself believe the images to have ANYTHING to do with the text.
My own conjecture is that the manuscript is a one-way encipherment. A
cipher so clever that the inventor didn't even think of how it could be
deciphered. Sorta like an /etc/passwd file.
Bibliography
------------
1. William R. Newbold. _The Cipher of Roger Bacon_. Roland G. Kent, ed.
University of Pennsylvania Press, 1928.
2. Joseph Martin Feely. _Roger Bacon's Cipher: The Right Key Found_.
Rochester, N.Y.: Joseph Martin Feely, pub., 1943.
3. _The Most Mysterious Manuscript_. Robert S. Brumbaugh, ed. Southern
Illinois University Press, 1978.
Unix filters are so wonderful. Massaging the machine readable file, we find:
4182 "words", of which 1284 are used more than once, 308 used 8+ times,
184 used 15+ times, 23 used 100+ times.
Does this tell us anything about the language (if any) the text is written
in?
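For the curious, tallies like these are classic one-liner territory for Unix filters (something along the lines of `tr -s ' ' '\n' | sort | uniq -c | sort -rn`). The same counts in a few lines of Python, shown here on a made-up stand-in sample since the machine-readable transcription itself is not included in this post:

```python
from collections import Counter

# Stand-in sample; the actual machine-readable transcription is not included.
text = "8AM OE 8AM SC89 OE 8AM ZC89 OE SOE"
counts = Counter(text.split())

words_used_more_than_once = sum(1 for n in counts.values() if n > 1)

# print count followed by "word", as in the list below
for word, n in counts.most_common():
    print(n, word)
```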
For those who may be interested, here are the 23 words used 100+ times:
121 2
115 4OFAE
114 4OFAM
155 4OFAN
195 4OFC89
162 4OFCC89
101 4OFCC9
189 89
111 8AE
492 8AM
134 8AN
156 8AR
248 OE
148 OR
111 S9
251 SC89
142 SC9
238 SOE
150 SOR
244 ZC89
116 ZC9
116 ZOE
Could someone email the Voynich Ms. ref list that appeared here not
very long ago? Thanks in advance...
Also... I came across the following ref that is fun(?):
The Voynich manuscript: an elegant enigma / M. E. D'Imperio
Fort George E. Mead, Md. : National Security Agency(!)
Central Security Service(?), 1978. ix, 140 p. : ill. ; 27 cm.
The (?!) are mine... Sorry if this was already on the list, but the
mention of the NSA (and what's the CSS?) made it jump out at me...
--
Ron Carter | rcarter@nyx.cs.du.edu rcarter GEnie 70707.3047 CIS
Director | Center for the Study of Creative Intelligence
Denver, CO | Knowledge is power. Knowledge to the people. Just say know.
Distribution: na
Organization: Wetware Diversions, San Francisco
From sci.archaeology:
>From: jamie@cs.sfu.ca (Jamie Andrews)
>Date: 16 Nov 91 00:49:08 GMT
>
> It seems like the person who would be most likely to solve
>this Voynich manuscript cipher would have
>(a) knowledge of the modern techniques for solving more complex
> ciphers such as Playfair and Vigenère; and
>(b) knowledge of the possible contemporary and archaic languages
> in which the plaintext could have been written.
An extended discussion of the Voynich Manuscript may be found in the
tape of the same name by Terence McKenna. I'm not sure who is currently
publishing this particular McKenna tape but probably one of:
Dolphin Tapes, POB 71, Big Sur, CA 93920
Sounds True, 1825 Pearl St., Boulder, CO 80302
Sound Photosynthesis, POB 2111, Mill Valley, CA 94942
The Spring 1988 issue of Gnosis magazine contained an article by McKenna
giving some background of the Voynich Manuscript and attempts to decipher
it, and reviewing Leo Levitov's "Solution of the Voynich Manuscript"
(published in 1987 by Aegean Park Press, POB 2837 Laguna Hills, CA 92654).
Levitov's thesis is that the manuscript is the only surviving primary
document of the Cathar faith (exterminated on the orders of the Pope in
the Albigensian Crusade in the 1230s) and that it is in fact not
encrypted material but rather is a highly polyglot form of Medieval
Flemish with a large number of Old French and Old High German loan
words, written in a special script.
As far as I know, there has been no challenge to Levitov's claims so far.
Michael Barlow, who had reviewed Levitov's book in Cryptologia, had sent me
photocopies of the pages where much of the language was described
(pp.21-31). I have just found them, and am looking at them now as I am
typing this. Incidentally, I do not believe this has anything to do with
cryptology proper, but the decipherment of texts in unknown languages. So
if you are into cryptography proper, skip this.
Looking at the "Voynich alphabet" pp.25-27, I made a list of the letters of
the Voynich language as Levitov interprets them, and I added phonetic
descriptions of the sounds I *think* Levitov meant to describe. Here it is:
Letter#  Phonetic  Phonetic description                 In plain English
         (IPA)     in linguists' jargon
1        a         low open, central, unrounded         a as in father
         e         mid close, front, unrounded          ay as in May
         O         mid open, back, rounded              aw as in law, or o as in
                                                        got (British pronunciation)
2        s         unvoiced dental fricative            s as in so
3        d         voiced dental stop                   d
4        E         mid, front, unrounded                e as in wet
5        f         unvoiced labiodental fricative       f
6        i         short, high open, front, unrounded   i as in dim
7        i:        long, high, front, unrounded         ea as in weak
8        i:E (?)   I can't make head nor tail of Levitov's explanations.
                   Probably like "ei" in "weird", dragging along the "e":
                   "weeeird"! (British pronunciation, with a silent "r")
9        C         unvoiced palatal fricative           ch as in German ich
10       k         unvoiced velar stop                  k
11       l         lateral; can't be more precise from the description,
                   probably like l in "loony"
12       m         voiced bilabial nasal                m
13       n         voiced dental nasal                  n
14       r (?)     cannot tell precisely from           Scottish r? Dutch r?
                   the description
15       t         no description; dental stop?         t
16       t         another form of #15                  t
17       T (?)     no description                       th as in this?
                                                        th as in thick?
18       TE (?)    again, no description
         or ET (?)
19       v         voiced labiodental fricative         v as in rave
20       v         ditto, same as #19                   ditto
(By now, you will have guessed what my conclusion about Levitov's
decipherment was)
In the column headed "Phonetic (IPA)" I have used capital letters for lack
of the special international phonetic symbols:
E for the Greek letter "epsilon"
O for the letter that looks like a mirror-image of "c"
C for c-cedilla
T for the Greek letter "theta"
The colon (:) means that the sound represented by the preceding letter is
long, e.g. "i:" is a long "i".
The rest, #21 to 25, are not "letters" proper, but represent groups
of two or more letters, just like #18 does. They are:
21 av
22a Ev
22b vE
23 CET
24 kET
25 sET
That gives us a language with 6 vowels: a (#1), e (#1 again), O (#1 again),
E (#4), i (#6), and i: (#7). Letter #8 is not a vowel, but a combination
of two vowels: i: (#7) and probably E (#4). Levitov writes that the
language is derived from Dutch. If so, it has lost the "oo" sound (English
spelling; "oe" in Dutch spelling), and the three front rounded vowels of
Dutch: u as in U ("you", polite), eu as in deur ("door"), u as in vlug
("quick"). Note that out of six vowels, three are confused under the same
letter (#1), even though they sound very different from one another: a, e,
O. Just imagine that you had no way of distinguishing between "last",
"lest" and "lost" when writing in English, and you'll have a fair idea of
the consequences.
Let us look at the consonants now. I will put them in a matrix, with the
points of articulation in one dimension, and the manner of articulation in
the other (it's all standard procedure when analyzing a language). Brackets
around a letter will mean that I could not tell where to place it exactly,
and just took a guess.
                     labial   dental   palatal   velar
nasal                  m        n
voiced stop                     d
unvoiced stop                   t                   k
voiced fricative       v       (T)
unvoiced fricative     f        s         C
lateral                         l
trill (?)                      (r)
Note that there are only twelve consonant sounds. That is unheard of for a
European language. No European language has so few consonant sounds.
Spanish, which has very few sounds (only five vowels), has seventeen
distinct consonant sounds, plus two semi-consonants. Dutch has from 18 to
20 consonants (depending on speakers, and how you analyze the sounds.
Warning: I just counted them on the back of an envelope; I might have
missed one or two). What is also extraordinary in Levitov's language is
that it lacks a "g", and *BOTH* "b" and "p". I cannot think of one single
language in the world that lacks both "b" and "p". Levitov also says that
"m" occurs only word-finally, never at the beginning, nor in the middle of
a word. That's true: the letter he says is an "m" is always word-final in
the reproductions I have seen of the Voynich MS. But no language I know of
behaves like that. All have an "m" (except one American Indian language,
which is very famous for that, and the name of which escapes me right now),
but, if there is a position where "m" never appears in some languages, that
position is word-finally. Exactly the reverse of Levitov's language.
What does Levitov say about the origin of the language?
"The language was very much standardized. It was an application of a
polyglot oral tongue into a literary language which would be understandable
to people who did not understand Latin and to whom this language could be
read."
At first reading, I would dismiss it all as nonsense: "polyglot oral
tongue" means nothing in linguistics terms. But Levitov is a medical
doctor, so allowances must be made. The best meaning I can read into
"polyglot oral tongue" is "a language that had never been written before
and which had taken words from many different languages". That is perfectly
reasonable: English, for one, has done that. Half its vocabulary is Norman
French, and some of the commonest words have non-Anglo-Saxon origins.
"Sky", for instance, is a Danish word. So far, so good.
Levitov continues: "The Voynich is actually a simple language because it
follows set rules and has a very limited vocabulary.... There is a
deliberate duality and plurality of words in the Voynich and much use of
apostrophism".
By "duality and plurality of words" Levitov means that the words are highly
ambiguous, most words having two or more different meanings. I can only
guess at what he means by apostrophism: running words together, leaving
bits out, as we do in English: can not --> cannot --> can't, is not -->
ain't.
Time for a tutorial in the Voynich language as I could piece it together
from Levitov's description. Because, according to Levitov, letter #1
represents 3 vowel sounds, I will represent it by just "a", but remember:
it can be pronounced a, e, or o. But I will distinguish, as does Levitov,
between the two letters which he says were both pronounced "v", using "v"
for letter #20 and "w" for letter #21.
Some vocabulary now. Some verbs first, which Levitov gives in the
infinitive. In the Voynich language the infinitive of verbs ends in -en,
just like in Dutch and in German. I have removed that grammatical ending in
the list which follows, and given probable etymologies in parentheses
(Levitov doesn't give any):
ad = to aid, help ("aid")
ak = to ache, pain ("ache")
al = to ail ("ail")
and = to undergo the "Endura" rite ("End[ura]", probably)
d = to die ("d[ie]")
fad = to be for help (from f= for and ad=aid)
fal = to fail ("fail")
fil = to be for illness (from: f=for and il=ill)
il = to be ill ("ill")
k = to understand ("ken", Dutch and German "kennen" meaning "to know")
l = to lie deathly ill, in extremis ("lie", "lay")
s = to see ("see", Dutch "zien")
t = to do, treat (German "tun" = to do)
v = to will ("will" or Latin "volo" perhaps)
vid = to be with death (from vi=with and d=die)
vil = to want, wish, desire (German "willen")
vis = to know ("wit", German "wissen", Dutch "weten")
vit = to know (ditto)
viT = to use (no idea, Latin "uti" perhaps?)
vi = to be the way (Latin "via")
eC = to be each ("each")
ai:a = to eye, look at ("eye", "oog" in Dutch)
en = to do (no idea)
Example given by Levitov: enden "to do to death" made up of "en"
(to do), "d" (to die) and "en" (infinitive ending). Well, to me,
that's doing it the hard way. What's wrong with just "enden" = to
end (German "enden", too!)
More vocabulary:
em = he or they (masculine) ("him")
er = her or they (feminine) ("her")
eT = it or they ("it" or perhaps "they" or Dutch "het")
an = one ("one", Dutch "een")
"There are no declensions of nouns or conjugation of verbs. Only the
present tense is used" says Levitov.
Examples:
den = to die (infinitive) (d = die, -en = infinitive)
deT = it/they die (d = die, eT = it/they)
diteT = it does die (d = die, t = do, eT = it/they, with an "i" added to
make it easier to pronounce, which is quite common and natural
in languages)
But Levitov contradicts himself immediately, giving another tense (known
as present progressive in English grammar):
dieT = it is dying
But I may be unfair there, perhaps it is a compound: d = die, i = is
...-ing, eT = it/they.
Plurals are formed by suffixing "s" in one part of the MS, "eT" in another:
"ans" or "aneT" = ones.
More:
wians = we ones (wi = we, wie in Dutch, an = one, s = plural)
vian = one way (vi = way, an = one)
wia = one who (wi = who, a = one)
va = one will (v = will, a = one)
wa = who
wi = who
wieT = who, it (wi = who, eT = it)
witeT = who does it (wi = who, t = do, eT = it/they)
weT = who it is (wi = who, eT = it, then loss of "i", giving "weT")
ker = she understands (k = understand, er =she)
At this stage I would like to comment that we are here in the presence of a
Germanic language which behaves very, very strangely in the way of the
meanings of its compound words. For instance, "viden" (to be with death) is
made up of the words for "with", "die" and the infinitive suffix. I am sure
that Levitov here was thinking of a construction like German "mitkommen"
which means "to come along" (to "withcome"). I suppose I could say "Bitte,
sterben Sie mit" on the same model as "Bitte, kommen Sie mit" ("Come with
me/us, please"), thereby making up a verb "mitsterben", but that would mean
"to die together with someone else", not "to be with death".
Let us see how Levitov translates a whole sentence. Since he does not
explain how he breaks up those compound words I have tried to do it using
the vocabulary and grammar he provides in those pages. My tentative
explanations are in parentheses.
TanvieT faditeT wan aTviteT anTviteT atwiteT aneT
TanvieT = the one way (T = the (?), an = one, vi =way, eT = it)
faditeT = doing for help (f = for, ad = aid, i = -ing, t = do, eT = it)
wan = person (wi/wa = who, an = one)
aTviteT = one that one knows (a = one, T = that, vit = know, eT = it.
Here, Levitov adds one extra letter which is not in the text,
getting "aTaviteT", which provide the second "one" of his
translation)
anTviteT = one that knows (an =one, T = that, vit = know, eT = it)
atwiteT = one treats one who does it (a = one, t = do, wi = who,
t = do, eT = it. Literally: "one does [one] who does it".
The first "do" is translated as "treat", the second "one" is
added in by Levitov: he added one letter, which gives him
"atawiteT")
aneT = ones (an = one, -eT = the plural ending)
Levitov's translation of the above in better English: "the one way for
helping a person who needs it, is to know one of the ones who do treat
one".
Need I say more? Does anyone still believe that Levitov's translations are
worth anything?
As an exercise, here is the last sentence on p.31, with its word-for-word
translation by Levitov. I leave you to work it out, and to figure out what
it might possibly mean. Good luck!
tvieT nwn anvit fadan van aleC
tvieT = do the ways
nwn = not who does (but Levitov adds a letter to make it "nwen")
anvit = one knows
fadan = one for help
van = one will
aleC = each ail
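For anyone attempting the exercise, the style of decomposition Levitov applies can be mimicked mechanically. This is only a toy sketch of my own construction, not Levitov's stated method: the glossary is the handful of morphemes quoted above, and greedy longest-match is my guess at how such compounds split.

```python
# Toy sketch (my own construction, not Levitov's stated method): greedy
# longest-match segmentation of transliterated "words" against the
# morphemes and glosses quoted above.
GLOSSARY = {
    "T": "the/that", "an": "one", "a": "one", "vi": "way", "vit": "know",
    "eT": "it/they (also plural ending)", "f": "for", "ad": "aid",
    "i": "-ing", "t": "do", "wi": "who", "d": "die", "en": "do/infinitive",
}

def segment(word):
    """Split `word` into glossary morphemes, preferring the longest match."""
    parts, i = [], 0
    while i < len(word):
        for size in range(len(word) - i, 0, -1):   # longest match first
            piece = word[i:i + size]
            if piece in GLOSSARY:
                parts.append(piece)
                i += size
                break
        else:
            return None   # no segmentation found
    return parts

print(segment("TanvieT"))   # ['T', 'an', 'vi', 'eT'], as in Levitov's gloss
print(segment("faditeT"))   # ['f', 'ad', 'i', 't', 'eT']
```

Run on the sentence above, this reproduces Levitov's decompositions exactly, which rather illustrates the problem: with morphemes this short, almost any string segments into *something*, so a "translation" can always be produced.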
==> cryptology/swiss.colony.p <==
What are the 1987 Swiss Colony ciphers?
==> cryptology/swiss.colony.s <==
Did anyone solve the 1987 'Crypto-gift' contest that was run by
Swiss Colony? My friend and I worked on it for 4 months, but
didn't get anywhere. My friend solved the 1986 puzzle in
about a week and won $1000. I fear that we missed some clue that
makes it incredibly easy to solve. I'm including the code, clues
and a few notes for those of you so inclined to give it a shot.
197,333,318,511,824,
864,864,457,197,333,
824,769,372,769,864,
865,457,153,824,511,223,845,318,
489,953,234,769,703,489,845,703,
372,216,457,509,333,153,845,333,
511,864,621,611,769,707,153,333,
703,197,845,769,372,621,223,333,
197,845,489,953,223,769,216,223,
769,769,457,153,824,511,372,223,
769,824,824,216,865,845,153,769,
333,704,511,457,153,333,824,333,
953,372,621,234,953,234,865,703,
318,223,333,489,944,153,824,769,
318,457,234,845,318,223,372,769,
216,894,153,333,511,611,
769,704,511,153,372,621,
197,894,894,153,333,953,
234,845,318,223
CHRIS IS BACK WITH GOLD FOR YOU
HIS RHYMES CONTAIN THE SECRET.
YOU SCOUTS WHO'VE EARNED YOUR MERIT BADGE
WILL QUICKLY LEARN TO READ IT.
SO WHEN YOUR CHRISTMAS HAM'S ALL GONE
AND YOU'RE READY FOR THE TUSSLE,
BALL UP YOUR HAND INTO A FIST
AND SHOW OUR MOUSE YOUR MUSCLE.
PLEASE READ THESE CLUES WE LEAVE TO YOU
BOTH FINE ONES AND THE COARSE;
IF CARE IS USED TO HEED THEM ALL
YOU'LL SUFFER NO REMORSE.
Notes:
The puzzle comes as a jigsaw that when assembled has the list of
numbers. They are arranged as indicated on the puzzle, with commas.
The lower right corner has a drawing of 'Secret Agent Chris Mouse'.
He holds a box under his arm which looks like the box
the puzzle comes in. The upper left
corner has the words 'NEW 1987 $50,000 Puzzle'. The lower
left corner is empty. The clues are printed on the
entry form in upper case, with the punctuation as shown.
Ed Rupp
...!ut-sally!oakhill!ed
Motorola, Inc., Austin Tx.
==> decision/allais.p <==
The Allais Paradox involves the choice between two alternatives:
A. 89% chance of an unknown amount
10% chance of $1 million
1% chance of $1 million
B. 89% chance of an unknown amount (the same amount as in A)
10% chance of $2.5 million
1% chance of nothing
What is the rational choice? Does this choice remain the same if the
unknown amount is $1 million? If it is nothing?
==> decision/allais.s <==
This is "Allais' Paradox".
Which choice is rational depends upon the subjective value of money.
Many people are risk averse, and prefer the better chance of $1
million of option A. This choice is firm when the unknown amount is
$1 million, but seems to waver as the amount falls to nothing. In the
latter case, the risk averse person favors B because there is not much
difference between 10% and 11%, but there is a big difference between
$1 million and $2.5 million.
Thus the choice between A and B depends upon the unknown amount, even
though it is the same unknown amount independent of the choice. This
violates the "independence axiom" that rational choice between two
alternatives should depend only upon how those two alternatives
differ.
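The independence point can be made concrete with a quick expected-monetary-value calculation (a sketch assuming a risk-neutral agent, i.e. utility linear in dollars):

```python
# Expected monetary value of each lottery as a function of the unknown amount U.
def ev_A(U):
    return 0.89 * U + 0.10 * 1_000_000 + 0.01 * 1_000_000

def ev_B(U):
    return 0.89 * U + 0.10 * 2_500_000 + 0.01 * 0

# The difference B - A never depends on U, so a risk-neutral agent's
# preference cannot flip as U changes -- yet real choices do flip,
# which is exactly the paradox.
for U in (0, 1_000_000):
    print(U, ev_B(U) - ev_A(U))
```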
However, if the amounts involved in the problem are reduced to tens of
dollars instead of millions of dollars, people's behavior tends to
fall back in line with the axioms of rational choice. People tend to
choose option B regardless of the unknown amount. Perhaps when
presented with such huge numbers, people begin to calculate
qualitatively. For example, if the unknown amount is $1 million the
options are:
A. a fortune, guaranteed
B. a fortune, almost guaranteed
a tiny chance of nothing
Then the choice of A is rational. However, if the unknown amount is
nothing, the options are:
A. small chance of a fortune ($1 million)
large chance of nothing
B. small chance of a larger fortune ($2.5 million)
large chance of nothing
In this case, the choice of B is rational. The Allais Paradox then
results from the limited ability to rationally calculate with such
unusual quantities. The brain is not a calculator and rational
calculations may rely on things like training, experience, and
analogy, none of which would help in this case. This hypothesis
could be tested by studying the correlation between paradoxical
behavior and "unusualness" of the amounts involved.
If this explanation is correct, then the Paradox amounts to little
more than the observation that the brain is an imperfect rational
engine.
==> decision/division.p <==
N-Person Fair Division
If two people want to divide a pie but do not trust each other, they can
still ensure that each gets a fair share by using the technique that one
person cuts and the other person chooses. Generalize this technique
to more than two people. Take care to ensure that no one can be cheated
by a coalition of the others.
==> decision/division.s <==
N-Person Fair Division
Number the people from 1 to N. Person 1 cuts off a piece of the pie.
Person 2 can either diminish the size of the cut off piece or pass.
The same for persons 3 through N. The last person to touch the piece
must take it and is removed from the process. Repeat this procedure
with the remaining N - 1 people, until everyone has a piece.
(cf. Luce and Raiffa, "Games and Decisions", Wiley, 1957, p. 366)
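As a sanity check on the procedure, here is a minimal simulation under the simplest possible assumption: all players are honest and value the pie identically, so every offered piece is exactly "fair", nobody trims, and each player walks away with exactly 1/N.

```python
def last_diminisher(n):
    """Shares received when every player values the pie identically and honestly."""
    remaining = 1.0
    shares = []
    for players_left in range(n, 1, -1):
        piece = remaining / players_left  # cutter cuts what they consider fair
        # with identical honest valuations everyone passes, so the
        # cutter (last to touch the piece) keeps it and leaves
        shares.append(piece)
        remaining -= piece
    shares.append(remaining)              # last player takes what is left
    return shares

print(last_diminisher(4))   # [0.25, 0.25, 0.25, 0.25]
```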
There is a cute result in combinatorics called the Marriage Theorem.
A village has n men and n women, such that for all 0 < k <= n and for any
set of k men there are at least k women, each of whom is in love with at least
one of the k men. All of the men are in love with all of the women :-}.
The theorem asserts that there is a way to arrange the village into n
monogamous couplings.
The Marriage Theorem can be applied to the Fair Pie-Cutting Problem.
One player cuts the pie into n pieces. Each of the players labels
some non-null subset of the pieces as acceptable to him. For reasons
given below he should "accept" each piece of size > 1/n, not just the
best piece(s). The pie-cutter is required to "accept" all of the pieces.
Given a set S of players let S' denote the set of pie-pieces
acceptable to at least one player in S. Let t be the size of the largest
set (T) of players satisfying |T| > |T'|. If there is no such set, the
Marriage Theorem can be applied directly. Since the pie-cutter accepts
every piece we know that t < n.
Choose |T| - |T'| pieces at random from outside T', glue them
together with the pieces in T' and let the players in T repeat the game
with this smaller (t/n)-size pie. This is fair since they all rejected
the other n-t pieces, so they believe this pie is larger than t/n.
The remaining n-t players can each be assigned one of the remaining
n-t pie-pieces without further ado due to the Marriage Theorem. (Otherwise
the set T above was not maximal.)
==> decision/dowry.p <==
Sultan's Dowry
A sultan has granted a commoner a chance to marry one of his hundred
daughters. The commoner will be presented the daughters one at a time.
When a daughter is presented, the commoner will be told the daughter's
dowry. The commoner has only one chance to accept or reject each
daughter; he cannot return to a previously rejected daughter.
The sultan's catch is that the commoner may only marry the daughter with
the highest dowry. What is the commoner's best strategy assuming
he knows nothing about the distribution of dowries?
==> decision/dowry.s <==
Solution
Since the commoner knows nothing about the distribution of the dowries,
the best strategy is to wait until a certain number of daughters have
been presented then pick the highest dowry thereafter. The exact number to
skip is determined by the condition that the odds that the highest dowry
has already been seen is just greater than the odds that it remains to be
seen AND THAT IF IT IS SEEN IT WILL BE PICKED. This amounts to finding the
smallest x such that:
x/n > x/n * (1/(x+1) + ... + 1/(n-1)).
Working out the math for n=100 and calculating the probability gives:
The commoner should wait until he has seen 37 of the daughters,
then pick the first daughter with a dowry that is bigger than any
preceding dowry. With this strategy, his odds of choosing the daughter
with the highest dowry are surprisingly high: about 37%.
(cf. F. Mosteller, "Fifty Challenging Problems in Probability with Solutions",
Addison-Wesley, 1965, #47; "Mathematical Plums", edited by Ross Honsberger,
pp. 104-110)
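The cutoff and success probability claimed above can be checked numerically: the probability of success when skipping the first r daughters and then taking the first record is (r/n) * (1/r + 1/(r+1) + ... + 1/(n-1)), so a brute-force scan over r suffices.

```python
def best_cutoff(n):
    """Return (r, p): skip the first r candidates; p = success probability."""
    best_r, best_p = 0, 0.0
    for r in range(1, n):
        # P(best is at position i > r AND best of the first i-1 lies in the
        # first r) summed over i gives (r/n) * sum_{k=r}^{n-1} 1/k
        p = (r / n) * sum(1.0 / k for k in range(r, n))
        if p > best_p:
            best_r, best_p = r, p
    return best_r, best_p

r, p = best_cutoff(100)
print(r, round(p, 4))   # 37 0.371
```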
==> decision/envelope.p <==
Someone has prepared two envelopes containing money. One contains twice as
much money as the other. You have decided to pick one envelope, but then the
following argument occurs to you: Suppose my chosen envelope contains $X,
then the other envelope either contains $X/2 or $2X. Both cases are
equally likely, so my expectation if I take the other envelope is
.5 * $X/2 + .5 * $2X = $1.25X, which is higher than my current $X, so I
should change my mind and take the other envelope. But then I can apply the
argument all over again. Something is wrong here! Where did I go wrong?
In a variant of this problem, you are allowed to peek into the envelope
you chose before finally settling on it. Suppose that when you peek you
see $100. Should you switch now?
==> decision/envelope.s <==
Let's follow the argument carefully, substituting real numbers for
variables, to see where we went wrong. In the following, we will assume
the envelopes contain $100 and $200. We will consider the two equally
likely cases separately, then average the results.
First, take the case that X=$100.
"I have $100 in my hand. If I exchange I get $200. The value of the exchange
is $200. The value from not exchanging is $100. Therefore, I gain $100
by exchanging."
Second, take the case that X=$200.
"I have $200 in my hand. If I exchange I get $100. The value of the exchange
is $100. The value from not exchanging is $200. Therefore, I lose $100
by exchanging."
Now, averaging the two cases, I see that the expected gain is zero.
So where is the slip up? In one case, switching gets X/2 ($100), in the
other case, switching gets 2X ($200), but X is different in the two
cases, and I can't simply average the two different X's to get 1.25X.
I can average the two numbers ($100 and $200) to get $150, the expected
value of switching, which is also the expected value of not switching,
but I cannot under any circumstances average X/2 and 2X.
This is a classic case of confusing variables with constants.
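The same averaging argument can be checked by simulation. A quick sketch with the envelopes fixed at $100 and $200 (amounts chosen here purely for illustration):

```python
import random

random.seed(42)
trials = 100_000
stay = switch = 0.0
for _ in range(trials):
    # randomly hand the player one of the two envelopes
    chosen, other = random.choice([(100, 200), (200, 100)])
    stay += chosen
    switch += other

# both strategies average out to about $150 -- switching gains nothing
print(stay / trials, switch / trials)
```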
OK, so let's consider the case in which I looked into the envelope and
found that it contained $100. This pins down what X is: a constant.
Now the argument is that the odds of $50 is .5 and the odds of $200
is .5, so the expected value of switching is $125, so we should switch.
However, the only way the odds of $50 could be .5 and the odds of $200
could be .5 is if all integer values are equally likely. But a
distribution that assigns the same nonzero probability to every integer
would sum to infinity, not to one as a probability distribution must.
Thus, the assumption of equal likelihood for all integer values is
self-contradictory, and leads to the invalid proof that you should
always switch. This is reminiscent of the plethora of proofs that 0=1;
they always involve some illegitimate assumption, such as the validity
of division by zero.
Limiting the maximum value in the envelopes removes the self-contradiction
and the argument for switching. Let's see how this works.
Suppose all amounts up to $1 trillion were equally likely to be
found in the first envelope, and all amounts beyond that would never
appear. Then for small amounts one should indeed switch, but not for
amounts above $500 billion. The strategy of always switching would pay
off for most reasonable amounts but would lead to disastrous losses for
large amounts, and the two would balance each other out.